51 research outputs found

    Interaction between a Human and an Anthropomorphized Object

    Bayesian Inference of Self-intention Attributed by Observer

    Most agents that learn task policies through reinforcement learning (RL) lack the ability to communicate with people, which makes human-agent collaboration challenging. We believe that, in order to comprehend utterances from human colleagues, RL agents must infer the mental states that people attribute to them, because people sometimes infer an interlocutor's mental states and communicate on the basis of that inference. This paper proposes the PublicSelf model, a model of a person who infers how their own behavior appears to their colleagues. We implemented the PublicSelf model for an RL agent in a simulated environment and examined the model's inferences by comparing them with people's judgments. The results showed that, in scenes where people could perceive clear intentionality in the agent's behavior, the model correctly inferred the intention that people attributed to the agent's movement.
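
    As a concrete illustration of inferring the intention an observer attributes to an agent's movement, the sketch below scores candidate goals against an observed trajectory with a simple Bayesian update. The grid world, the candidate goals, and the noisy-rational step likelihood are assumptions made here for illustration only; they are not the PublicSelf model's actual implementation.

        # Minimal sketch, assuming a grid world with known candidate goals and a
        # noisy-rational observer; not the paper's implementation.
        import numpy as np

        GOALS = [(0, 4), (4, 4), (4, 0)]   # hypothetical candidate goal positions
        BETA = 2.0                          # assumed rationality of the observed agent

        def step_likelihood(pos, next_pos, goal, beta=BETA):
            """Unnormalized likelihood: steps that reduce distance to the goal score higher."""
            def dist(a, b):
                return abs(a[0] - b[0]) + abs(a[1] - b[1])
            gain = dist(pos, goal) - dist(next_pos, goal)   # +1 toward the goal, -1 away
            return np.exp(beta * gain)

        def infer_attributed_intention(trajectory):
            """Posterior over goals that an observer would attribute to the trajectory."""
            log_post = np.zeros(len(GOALS))                 # uniform prior over goals
            for pos, next_pos in zip(trajectory, trajectory[1:]):
                for i, goal in enumerate(GOALS):
                    log_post[i] += np.log(step_likelihood(pos, next_pos, goal))
            post = np.exp(log_post - log_post.max())        # softmax for numerical stability
            return post / post.sum()

        # A trajectory heading up and to the right reads as aiming for goal (4, 4).
        print(infer_attributed_intention([(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]))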

    Gestures for Manually Controlling a Helping Hand Robot

    Helping hand robots have been the focus of a number of studies and have high potential in modern manufacturing processes and in daily living. Because helping hand robots interact closely with users, it is important to find natural and intuitive user interfaces for interacting with them in various situations. This study describes a set of gestures for interacting with and controlling helping hand robots in situations in which users need to manually control the robot but one or both hands are unavailable, for example, when users are holding tools or objects. The gestures are derived from an experimental study that asked participants to propose gestures suitable for controlling primitive robot motions. The selected gestures can be used to control the translation and orientation of a helping hand robot's end effector while one or both hands are engaged with a task. To validate the proposed gestures, we implemented a helping hand robot system that performs a soldering task.
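
    To make the idea of mapping gestures to primitive robot motions more concrete, the sketch below dispatches recognized gesture labels to incremental end-effector commands. The gesture names, step sizes, and the send_delta() stub are assumptions made here for illustration; they are not the gesture set or robot interface used in the study.

        # Minimal sketch, assuming a gesture recognizer that emits string labels
        # and a robot controller that accepts small pose increments.
        from dataclasses import dataclass

        STEP_M = 0.01      # assumed translation step per gesture event (metres)
        STEP_RAD = 0.05    # assumed rotation step per gesture event (radians)

        @dataclass
        class EndEffectorDelta:
            dx: float = 0.0
            dy: float = 0.0
            dz: float = 0.0
            droll: float = 0.0
            dpitch: float = 0.0
            dyaw: float = 0.0

        # Hypothetical mapping from gesture labels (usable while both hands hold
        # tools or objects) to primitive end-effector motions.
        GESTURE_TO_DELTA = {
            "move_left":  EndEffectorDelta(dx=-STEP_M),
            "move_right": EndEffectorDelta(dx=+STEP_M),
            "move_up":    EndEffectorDelta(dz=+STEP_M),
            "move_down":  EndEffectorDelta(dz=-STEP_M),
            "tilt_left":  EndEffectorDelta(droll=-STEP_RAD),
            "tilt_right": EndEffectorDelta(droll=+STEP_RAD),
        }

        def send_delta(delta: EndEffectorDelta) -> None:
            """Stub for the robot command interface (replace with the real controller)."""
            print(f"command: {delta}")

        def on_gesture(label: str) -> None:
            """Dispatch a recognized gesture to an incremental end-effector command."""
            delta = GESTURE_TO_DELTA.get(label)
            if delta is not None:
                send_delta(delta)

        on_gesture("move_left")   # example: nudge the end effector 1 cm to the left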

    Effects of appearance and gender on pre-touch proxemics in virtual reality

    Virtual reality (VR) environments are increasingly popular for various applications, and the appearance of virtual characters is a critical factor that influences user behavior. In this study, we investigated the impact of avatar and agent appearance on pre-touch proxemics in VR. We designed experiments using three user avatars (man/woman/robot) and three virtual agents (man/woman/robot) and measured the pre-touch reaction distances to the face and body, that is, the distances at which a person starts to feel uncomfortable before being touched. We examined how these distances varied with avatar appearance, agent appearance, and user gender. Our results revealed that the appearance of avatars and agents significantly affected pre-touch reaction distances: participants using a female avatar maintained larger distances before their face and body were touched, and participants also preferred greater distances before being touched by a robot agent. Interestingly, we observed no effect of user gender on pre-touch reaction distances. These findings have implications for the design and implementation of VR systems, as they suggest that avatar and agent appearance plays a significant role in shaping users' perceptions of pre-touch proxemics. Our study highlights the importance of considering these factors when creating immersive and socially acceptable VR experiences.
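
    As a rough illustration of how a pre-touch reaction distance of this kind can be computed, the sketch below measures the hand-to-target distance at the moment a participant signals discomfort and averages it over trials. The trial structure and field names are assumptions made here for illustration, not the study's logging format.

        # Minimal sketch, assuming each trial logs the approaching hand position and
        # the target body-part position at the moment the participant presses "stop".
        import math
        from statistics import mean

        def distance(a, b):
            """Euclidean distance between two 3-D points (metres)."""
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

        def reaction_distance(trial):
            """Hand-to-target distance when the participant signalled discomfort."""
            return distance(trial["hand_pos_at_stop"], trial["target_pos_at_stop"])

        # Hypothetical trials for one condition, e.g. a robot agent approaching the face.
        trials = [
            {"hand_pos_at_stop": (0.00, 1.60, 0.42), "target_pos_at_stop": (0.0, 1.60, 0.0)},
            {"hand_pos_at_stop": (0.05, 1.58, 0.38), "target_pos_at_stop": (0.0, 1.60, 0.0)},
        ]

        per_trial = [reaction_distance(t) for t in trials]
        print(f"mean pre-touch reaction distance: {mean(per_trial):.3f} m")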

    Reading a Robot's Mind: A Model of Utterance Understanding based on the Theory of Mind Mechanism

    The purpose of this paper is to construct a methodology for smooth communication between humans and robots. The focus is on a mindreading mechanism, which is indispensable in human-human communication, and we propose a model of utterance understanding based on this mechanism. Concretely, we apply the model of a mindreading system (Baron-Cohen 1996) to human-robot communication. Moreover, we implement a robot interface system based on the proposed model. Psychological experiments were carried out to test the following hypothesis: by reading a robot's mind, a human can easily estimate the robot's intention and can even understand the robot's unclear utterances produced by synthesized speech. The results of the experiments statistically supported our hypothesis.
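
    As a loose illustration of how an estimate of a robot's intention could help a listener resolve an unclear utterance, the sketch below re-weights acoustically ambiguous speech hypotheses with a prior over what the robot is likely to mean in the current situation. The hypotheses, scores, and priors are invented for illustration and are not the paper's mindreading model.

        # Minimal sketch, assuming a list of (text, acoustic score) hypotheses and an
        # intention prior supplied by the listener's model of the robot's mental state.
        def interpret(asr_hypotheses, intention_prior):
            """Pick the reading maximizing acoustic score x plausibility given intention."""
            best, best_score = None, float("-inf")
            for text, acoustic_score in asr_hypotheses:
                score = acoustic_score * intention_prior.get(text, 0.01)
                if score > best_score:
                    best, best_score = text, score
            return best

        # Hypothetical example: the utterance is acoustically ambiguous, but the robot
        # stands near the door, so the attributed intention favours "open the door".
        asr_hypotheses = [("open the door", 0.40), ("open the drawer", 0.45)]
        intention_prior = {"open the door": 0.8, "open the drawer": 0.1}
        print(interpret(asr_hypotheses, intention_prior))   # -> "open the door"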

    A Speaking Technique for Self-Driven Interactive Robot
